Code Review
This pull request introduces backup drivers for ZFS volumes, for both local and S3 storage. The implementation is comprehensive, adding an abstract base class for ZFS backupers and concrete implementations for local and S3 targets. The changes are generally well-structured.
However, I've identified several critical and high-severity issues that should be addressed. These include a command injection vulnerability, a resource leak bug that can leave ZFS snapshots behind, incorrect coupling between modules, and some unimplemented features that are presented as available. I've provided detailed comments and suggestions for each of these points.
Two drivers added:
- LocalZFSBackuper: backs up to local storage
- S3ZFSBackuper: backs up to S3

Signed-off-by: Anton Kremenetsky <[email protected]>
```python
[
    "sudo",
    "zfs",
    "send",
```
Maybe for the future:

By default, `zfs send` generates a stream with maximum compatibility with pretty old ZFS versions, i.e. if compression was on, it will decompress the data; and vice versa, `zfs recv` will compress the data again.

Compression is a pretty interesting case, because it's easy to write highly compressible data that will be inflated many times over during backup! (I saw an example with x1600: 2 GB of compressed junk from a vim tmp file gave 2 TB of in-flight data.) We should think about that too.

Good flags to use: `-Lec`:

- `-L`: use "large" ZFS blocks (i.e. support the latest ZFS features)
- `-e`: use embedded blocks (files under ~100 bytes may be written in the block pointer itself)
- `-c`: don't decompress; send the data as-is

One last useful flag: `-w`/`--raw`. If native ZFS encryption is used, it sends the encrypted data as-is, so keys are not needed at all.
I think we don't need to change anything now, but at least see this comment for future reference @akremenetsky
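For future reference, threading those flags into the command construction could look like this minimal sketch (the helper name is hypothetical, not code from this PR):

```python
def zfs_send_cmd(snapshot: str) -> list[str]:
    # -L: large blocks, -e: embedded blocks, -c: send compressed data as-is
    # `snapshot` is e.g. "pool/dataset@backup"
    return ["sudo", "zfs", "send", "-L", "-e", "-c", snapshot]
```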
Oh, and an additional note: in the future we may support incremental backups too; you can pass `-I pool/dataset@parent_snap_name` for that.
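An incremental send command could be built the same way (again, a hypothetical helper for illustration):

```python
def zfs_incremental_send_cmd(parent_snap: str, snapshot: str) -> list[str]:
    # -I sends all snapshots between parent_snap and snapshot,
    # transferring only blocks that changed since parent_snap
    return ["sudo", "zfs", "send", "-I", parent_snap, snapshot]
```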
Ah, to be safe from decompression inflation, we could check the ZFS zvols' `compressratio` property. If it's over ~x100, something's pretty nasty.
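A minimal sketch of such a check; `zfs get` prints the ratio as e.g. `1.00x`, so parsing is trivial (the helper names and the x100 threshold are assumptions, not code from this PR):

```python
import subprocess


def parse_compressratio(raw: str) -> float:
    # zfs prints compressratio values like "1.00x" or "152.43x"
    return float(raw.strip().rstrip("x"))


def looks_suspicious(dataset: str, threshold: float = 100.0) -> bool:
    # Ask zfs for the dataset's compression ratio and flag extreme values
    raw = subprocess.check_output(
        ["zfs", "get", "-H", "-o", "value", "compressratio", dataset],
        text=True,
    )
    return parse_compressratio(raw) > threshold
```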
```python
if encryption:
    utils.encrypt_file(target_path, encryption.key, encryption.iv)
```
Again, for the future:

I propose thinking about making a chain of the different steps, maybe via pipes as a start, so we won't need as much temporary space and time.
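The chained-steps idea could be sketched like this: connect the stages through OS pipes so intermediate data never lands in a temp file. Everything here (the helper name, its shape) is hypothetical, not from this PR:

```python
import subprocess


def run_pipeline(commands: list[list[str]], data: bytes) -> bytes:
    """Feed `data` through `cmd1 | cmd2 | ...` using OS pipes."""
    procs: list[subprocess.Popen] = []
    for i, cmd in enumerate(commands):
        stdin = subprocess.PIPE if i == 0 else procs[-1].stdout
        p = subprocess.Popen(cmd, stdin=stdin, stdout=subprocess.PIPE)
        if i > 0:
            # Close the parent's copy of the previous stdout so the
            # downstream process sees EOF when its upstream exits.
            procs[-1].stdout.close()
        procs.append(p)
    procs[0].stdin.write(data)  # a real backuper would stream, not buffer
    procs[0].stdin.close()
    out = procs[-1].stdout.read()
    for p in procs:
        p.wait()
    return out
```

For a backup, the first stage would be the `zfs send` command itself, with compression and encryption as later stages writing straight to the target file.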
```python
self.backup_domains(backup_path, list(domains), encryption)

if not compress:
```
Compression is effective only before encryption; encrypted data is nearly incompressible. Maybe it's a little bit off-topic for this PR, but FYI.

Some security folks even say you should not compress data before encryption at all, but that's not practical here.

I see the best order as: compress first, encrypt at the end.
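A quick illustration of why the order matters, using random bytes as a stand-in for ciphertext (real ciphertext has the same near-maximal entropy):

```python
import os
import zlib

plain = b"A" * 100_000                 # highly compressible payload
ciphertext_like = os.urandom(100_000)  # stand-in for encrypted output

# Compressing before "encryption" shrinks the payload dramatically...
assert len(zlib.compress(plain)) < 1_000

# ...while compressing high-entropy (encrypted-looking) data gains
# nothing; zlib framing can even make it slightly larger.
assert len(zlib.compress(ciphertext_like)) > 99_000
```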
Two drivers added (LocalZFSBackuper, S3ZFSBackuper).

Closes #72